# Mixture of Experts Models
Qwen3 8B GGUF
Apache-2.0
Qwen3 is the latest generation of large language models in the Tongyi Qianwen (Qwen) series, offering a full suite of dense and Mixture of Experts (MoE) models. Built on large-scale training, Qwen3 achieves breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support. (A toy routing sketch follows this entry.)
Large Language Model · English
prithivMLmods
1,222
1
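All of the MoE models on this page share one core mechanism: a learned router sends each token to a small subset of expert networks, so total parameter count can grow while per-token compute stays modest. Below is a toy sketch of top-k routing in PyTorch; the hidden size, expert count, and k are illustrative and do not reflect any listed model's actual configuration.

```python
# Toy top-k expert routing: a gate scores experts per token, the top k
# experts run, and their outputs are combined with the gate weights.
import torch
from torch import nn

class TinyMoE(nn.Module):
    def __init__(self, hidden=16, n_experts=4, k=2):
        super().__init__()
        self.gate = nn.Linear(hidden, n_experts)          # router
        self.experts = nn.ModuleList(
            nn.Linear(hidden, hidden) for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                                 # x: (tokens, hidden)
        probs = self.gate(x).softmax(dim=-1)              # routing probabilities
        topv, topi = probs.topk(self.k, dim=-1)           # k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += topv[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

print(TinyMoE()(torch.randn(8, 16)).shape)                # torch.Size([8, 16])
```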
Qwen3 4B GGUF
Apache-2.0
Qwen3 is the latest generation of large language models in the Tongyi Qianwen (Qwen) series, offering a full suite of dense and Mixture of Experts (MoE) models. Built on large-scale training, Qwen3 achieves breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support. (A GGUF loading sketch follows this entry.)
Large Language Model · English
prithivMLmods
829
1
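GGUF builds like the two quantized Qwen3 repositories above are meant for local inference with llama.cpp and its bindings. A minimal sketch using llama-cpp-python; the quant filename is a placeholder for whichever file you actually download, not a name confirmed by these repositories.

```python
# Local chat completion from a GGUF quantization via llama-cpp-python.
# model_path is a placeholder; point it at the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-4B.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=4096,                         # context window to allocate
    n_gpu_layers=-1,                    # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize Mixture of Experts in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```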
Qwen3 235B A22B
Apache-2.0
Qwen3 is the latest generation of large language models in the Qwen series, offering a range of dense and Mixture of Experts (MoE) models. Based on extensive training, Qwen3 has achieved groundbreaking progress in reasoning, instruction following, agent capabilities, and multilingual support.
Large Language Model
Transformers
unsloth
421
2
Qwen3 30B A3B Base
Apache-2.0
Qwen3-30B-A3B-Base is the latest 30.5B-parameter Mixture of Experts (MoE) large language model in the Qwen series, supporting 119 languages and a 32k context length. (A loading sketch follows this entry.)
Large Language Model
Transformers
Qwen
9,745
33
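A minimal loading sketch with the transformers library, assuming enough GPU or CPU memory to hold the 30.5B weights; device_map="auto" is one reasonable way to shard them, and the repository id is as listed above.

```python
# Plain text completion with the base (non-chat) checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-30B-A3B-Base"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # shard across available devices
)

inputs = tok("Mixture of Experts models scale capacity by", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```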
Arrowneo AME 4x3B V0.1 MoE
MIT
A Mixture of Experts model designed to serve as the "soul" of AI VTubers (virtual streamers), combining code generation, instruction following, and multi-turn dialogue capabilities.
Large Language Model · Supports Multiple Languages
DataPilot
51
3
L3 SnowStorm V1.15 4x8B B
An experimental role-play-oriented Mixture of Experts model that aims to match or exceed Mixtral 8x7B and its fine-tuned variants on role-playing and emotional role-playing tasks.
Large Language Model
Transformers · English
xxx777xxxASD
26
11
Snowflake Arctic Base
Apache-2.0
Snowflake Arctic is a large language model developed by the Snowflake AI Research team, featuring a dense-MoE hybrid architecture with 480 billion parameters, designed for efficient text and code generation.
Large Language Model
Transformers
Snowflake
166
67
J.O.S.I.E.3 Beta12 7B Slerp
Apache-2.0
J.O.S.I.E.3-Beta12-7B-slerp is a 7B-parameter large language model created by slerp-merging Weyaxi/Einstein-v6-7B and argilla/CapybaraHermes-2.5-Mistral-7B; it supports multilingual interaction and uses the ChatML prompt format. (A slerp sketch follows this entry.)
Large Language Model
Transformers · Supports Multiple Languages
Goekdeniz-Guelmez
17
2
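The "slerp" in this model's name is spherical linear interpolation, a common checkpoint-merging operation: matched weight tensors are interpolated along the arc between them rather than along a straight line, which preserves their norms better than plain averaging. A toy per-tensor sketch in PyTorch; real merge tooling applies this across all matched tensors, often with per-layer interpolation factors.

```python
# Spherical linear interpolation (slerp) between two weight tensors.
import torch

def slerp(w1: torch.Tensor, w2: torch.Tensor, t: float = 0.5, eps: float = 1e-8):
    a, b = w1.flatten(), w2.flatten()
    cos = torch.clamp(
        (a / (a.norm() + eps)) @ (b / (b.norm() + eps)), -1.0, 1.0
    )
    omega = torch.acos(cos)            # angle between the two tensors
    if omega.abs() < eps:              # nearly parallel: fall back to lerp
        return (1 - t) * w1 + t * w2
    so = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return mixed.view_as(w1)

merged = slerp(torch.randn(4, 4), torch.randn(4, 4), t=0.5)  # t=0.5: midpoint
```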
Snowflake Arctic Instruct
Apache-2.0
Arctic is a 480-billion-parameter large language model with a dense-MoE hybrid architecture, developed by the Snowflake AI Research team and open-sourced under the Apache-2.0 license.
Large Language Model
Transformers
Snowflake
10.94k
354
Mixtral 8x22B Instruct V0.1
Apache-2.0
Mixtral-8x22B-Instruct-v0.1 is an instruction fine-tuned version of Mixtral-8x22B-v0.1, supporting multiple languages and function calling. (A tool-use prompt sketch follows this entry.)
Large Language Model
Transformers · Supports Multiple Languages
mistralai
12.80k
723
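Function calling here works by serializing tool schemas into the prompt so the model can emit a structured tool call. A hedged sketch via transformers chat templating, assuming a recent release whose apply_chat_template accepts a tools argument and that this repository's chat template renders tools; the get_weather schema is hypothetical, purely for illustration.

```python
# Render a tool-use prompt; the template serializes the (hypothetical)
# tool schema into the prompt text for the model to act on.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "What is the weather in Paris?"}],
    tools=tools,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)  # the model is expected to respond with a structured tool call
```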
Zephyr Orpo 141b A35b V0.1 GGUF
Apache-2.0
A 141-billion-parameter Mixture of Experts (MoE) model fine-tuned from Mixtral-8x22B-v0.1, with 35 billion active parameters, designed primarily for English text generation.
Large Language Model · English
MaziyarPanahi
10.04k
29
Laser Dolphin Mixtral 2x7b Dpo
Apache-2.0
A medium-scale Mixture of Experts (MoE) implementation based on Dolphin-2.6-Mistral-7B-DPO-Laser, showing an average improvement of roughly one point across evaluations.
Large Language Model
Transformers
macadeliccc
133
57
Phixtral 2x2 8
MIT
phixtral-2x2_8 is the first Mixture of Experts (MoE) model built upon two microsoft/phi-2 models, outperforming each individual expert model.
Large Language Model
Transformers · Supports Multiple Languages
mlabonne
178
148